In [17]:
from ht import *
import matplotlib.pyplot as plt
from math import *
%matplotlib inline
import numpy as np
from scipy.interpolate import *

In [16]:
%timeit Ft_aircooler(Thi=125., Tho=45., Tci=25., Tco=95., Ntp=1, rows=4)


The slowest run took 7.27 times longer than the fastest. This could mean that an intermediate result is being cached 
10000 loops, best of 3: 15.8 µs per loop

In [3]:
%timeit Rohsenow(Te=4.9, Cpl=4217., kl=0.680, mul=2.79E-4, sigma=0.0589, Hvap=2.257E6, rhol=957.854, rhog=0.595593, Csf=0.011, n=1.26)*4.9


The slowest run took 10.18 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 1.36 µs per loop

In [4]:
%timeit Stephan_Abdelsalam(Te=16.2, Tsat=437.5, Cpl=2730., kl=0.086, mul=156E-6, sigma=0.0082, Hvap=272E3, rhol=567, rhog=18.09, angle=35, correlation='hydrocarbon')


100000 loops, best of 3: 5.34 µs per loop

In [5]:
%timeit Serth_HEDH(D=0.0127, sigma=8.2E-3, Hvap=272E3, rhol=567, rhog=18.09)


The slowest run took 5.68 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 1.97 µs per loop

In [6]:
%timeit Nusselt_laminar(Tsat=370, Tw=350, rhog=7.0, rhol=585., kl=0.091, mul=158.9E-6, Hvap=776900, L=0.1)


The slowest run took 7.61 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 1.97 µs per loop

In [7]:
%timeit Boyko_Kruzhilin(m=100, rhog=6.36, rhol=582.9, kl=0.098, mul=159E-6, Cpl=2520., D=0.03, x=0.85)


The slowest run took 5.01 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 2.43 µs per loop

In [8]:
%timeit S_isothermal_pipe_eccentric_to_isothermal_pipe(.1, .4, .05, 10)


The slowest run took 10.75 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 1.31 µs per loop

In [9]:
%timeit Nu_cylinder_Zukauskas(7992, 0.707, 0.69)


The slowest run took 9.77 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 903 ns per loop

In [10]:
%timeit Nu_cylinder_Whitaker(6071, 0.7)


The slowest run took 13.29 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 610 ns per loop

In [11]:
%timeit Nu_vertical_cylinder(0.72, 1E7, Method='McAdams, Weiss & Saunders')
%timeit Nu_vertical_cylinder(0.72, 1E7)


The slowest run took 10.26 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 976 ns per loop
100000 loops, best of 3: 4.54 µs per loop

In [12]:
%timeit Nu_horizontal_cylinder(0.72, 1E7)
%timeit Nu_horizontal_cylinder(0.72, 1E7, Method='Morgan')
%timeit Nu_horizontal_cylinder(0.72, 1E7, Method='Churchill-Chu')


The slowest run took 6.47 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 2.62 µs per loop
The slowest run took 5.57 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 898 ns per loop
The slowest run took 7.49 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 1.3 µs per loop

In [13]:
%timeit laminar_T_const()
%timeit 3.66


The slowest run took 12.98 times longer than the fastest. This could mean that an intermediate result is being cached 
10000000 loops, best of 3: 91.8 ns per loop
100000000 loops, best of 3: 8.74 ns per loop
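The ~80 ns gap between `laminar_T_const()` and the bare literal is essentially pure Python function-call overhead, not computation. A minimal sketch of that comparison, using a hypothetical stub in place of the real ht function:

```python
import timeit

def laminar_T_const_stub():
    """Stand-in for ht.laminar_T_const: the constant Nusselt number
    for fully developed laminar flow at constant wall temperature."""
    return 3.66

# Timing the call against the bare literal isolates the cost of a
# Python function call, which dominates for trivial functions.
t_call = timeit.timeit('f()', globals={'f': laminar_T_const_stub}, number=100000)
t_literal = timeit.timeit('3.66', number=100000)
print(t_call > t_literal)  # True: the call is slower purely from overhead
```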

In [14]:
%timeit Nu_conv_internal(Re=1E5, Pr=1.2, fd=0.0185, eD=1E-3)
%timeit Nu_conv_internal(Re=1E5, Pr=1.2, fd=0.0185, eD=1E-3, AvailableMethods=True)
print(Nu_conv_internal(Re=1E5, Pr=1.2, fd=0.0185, eD=1E-3, AvailableMethods=True))


The slowest run took 4.60 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 5.44 µs per loop
100000 loops, best of 3: 3.19 µs per loop
['Churchill-Zajic', 'Petukhov-Kirillov-Popov', 'Gnielinski', 'Sandall', 'Webb', 'Friend-Metzner', 'Prandtl', 'von-Karman', 'Gowen-Smith', 'Kawase-Ulbrecht', 'Kawase-De', 'Dittus-Boelter', 'Sieder-Tate', 'Drexel-McAdams', 'Colburn', 'ESDU', 'Gnielinski smooth low Pr', 'Gnielinski smooth high Pr', 'Bhatti-Shah', 'Dipprey-Sabersky', 'None']

In [15]:
%timeit Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6)
%timeit Lehrer(m=2.5, Dtank=0.6, Djacket=0.65, H=0.6, Dinlet=0.025, dT=20., rho=995.7, Cp=4178.1, k=0.615, mu=798E-6, muw=355E-6, inlettype='radial', isobaric_expansion=0.000303)


The slowest run took 8.02 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 2.23 µs per loop
The slowest run took 5.12 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 2.75 µs per loop

In [16]:
%timeit Nu_packed_bed_Gnielinski(dp=8E-4, voidage=0.4, vs=1, rho=1E3, mu=1E-3, Pr=0.7)


The slowest run took 7.15 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 1.8 µs per loop

In [17]:
%timeit dP_Kern(m=11., rho=995., mu=0.000803, mu_w=0.000657, DShell=0.584, LSpacing=0.1524, pitch=0.0254, Do=.019, NBaffles=22)
%timeit dP_Zukauskas(Re=13943., n=7, ST=0.0313, SL=0.0343, D=0.0164, rho=1.217, Vmax=12.6)
%timeit dP_Zukauskas(Re=13943., n=7, ST=0.0313, SL=0.0313, D=0.0164, rho=1.217, Vmax=12.6)


The slowest run took 6.01 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 15.8 µs per loop
The slowest run took 4.98 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 10.8 µs per loop
The slowest run took 4.29 times longer than the fastest. This could mean that an intermediate result is being cached 
100000 loops, best of 3: 10.9 µs per loop

In [18]:
%timeit LMTD(100., 60., 30., 40.2)
%timeit LMTD(100., 60., 30., 40.2, counterflow=False)


The slowest run took 10.41 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 481 ns per loop
The slowest run took 5.76 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 496 ns per loop

In [21]:
%timeit [[Ntubes_Perrys(DBundle=1.184, Ntp=i, do=.028, angle=j) for i in [1,2,4,6]] for j in [30, 45, 60, 90]]
%timeit [[Ntubes_VDI(DBundle=1.184, Ntp=i, do=.028, pitch=.036, angle=j) for i in [1,2,4,8]] for j in [30, 45, 60, 90]]
%timeit [Ntubes_Phadkeb(DBundle=1.200-.008*2, do=.028, pitch=.036, Ntp=i, angle=45.) for i in [1,2,4,6,8]]


The slowest run took 6.56 times longer than the fastest. This could mean that an intermediate result is being cached 
10000 loops, best of 3: 20.6 µs per loop
10000 loops, best of 3: 47.9 µs per loop
1000 loops, best of 3: 438 µs per loop

In [2]:
from ht.insulation import ASHRAE_k, ASHRAE, materials_dict
%timeit [ASHRAE_k(ID) for ID in ASHRAE]
print(len(ASHRAE))


The slowest run took 4.59 times longer than the fastest. This could mean that an intermediate result is being cached 
10000 loops, best of 3: 82.6 µs per loop
223

In [13]:
a = 'Bitumen'
print(nearest_material(a))
%timeit nearest_material(a)


Bitumen, pure
1000 loops, best of 3: 710 µs per loop
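The ~710 µs here is dominated by fuzzy string matching across every material name. A sketch of the trade-off between a fuzzy scan and an exact dictionary lookup, using `difflib` and a hypothetical three-entry table (not ht's actual data or implementation):

```python
import difflib

# Hypothetical miniature materials table for illustration only
materials = {'Bitumen, pure': 0.17, 'Brick, fired clay': 0.72,
             'Concrete, cast dense': 1.4}

def nearest_by_fuzzy(name):
    # O(n) similarity scan over all keys: tolerant of partial names, but slow
    matches = difflib.get_close_matches(name, list(materials), n=1, cutoff=0.3)
    return matches[0] if matches else None

def nearest_by_exact(name):
    # O(1) dictionary membership test: fast, but requires the exact key
    return name if name in materials else None

print(nearest_by_fuzzy('Bitumen'))       # 'Bitumen, pure'
print(nearest_by_exact('Bitumen'))       # None; no exact key match
print(nearest_by_exact('Bitumen, pure')) # 'Bitumen, pure'
```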

In [15]:
%timeit blackbody_spectral_radiance(800., 4E-6)
%timeit q_rad(.85, 400, 305.)


The slowest run took 25.05 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 876 ns per loop
The slowest run took 8.85 times longer than the fastest. This could mean that an intermediate result is being cached 
1000000 loops, best of 3: 458 ns per loop

Conclusion: From 30 ms to 100 ns is not that wide a range; only about five orders of magnitude. String matching is definitely slow, as expected, so menu-driven selection is definitely preferred. Some of the more complicated functions take a decently long time to run, which is also no surprise. A better alternative to interp1d and interp2d exists and is equally easy to use; as a bonus, it provides a degree of smoothing and more control. The overhead of dealing with strings is not large, but it may still be worth moving away from them in favor of dictionary comparisons. By and large, the library is still quick, for Python anyway.
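The smoothing alternative mentioned above is presumably scipy's spline classes. A minimal sketch with UnivariateSpline, where the `s` parameter controls the allowed degree of smoothing (the data here is made up for illustration):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

np.random.seed(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + np.random.normal(0, 0.02, 50)  # slightly noisy samples

# Unlike interp1d, UnivariateSpline smooths: s sets the permitted sum of
# squared residuals (s=0 forces exact interpolation through every point)
spl = UnivariateSpline(x, y, k=3, s=0.1)
print(spl(5.0))  # evaluate anywhere in the fitted range
```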